One-shot video-based person re-identification with multi-loss learning and joint metric
Yuchang YIN, Hongyuan WANG, Li CHEN, Zundeng FENG, Yu XIAO
Journal of Computer Applications    2022, 42 (3): 764-769.   DOI: 10.11772/j.issn.1001-9081.2021040788

In order to reduce the huge labeling cost of person re-identification, a one-shot video-based person re-identification method with multi-loss learning and a joint metric was proposed. First, to address the problem that labeled samples are few and the resulting model is not robust enough, a Multi-Loss Learning (MLL) strategy was proposed: in each training round, different loss functions were applied to different data to improve the discriminative ability of the model. Second, a Joint Distance Metric (JDM) was proposed for label estimation, which combined the sample distance and the nearest-neighbor distance to further improve the accuracy of pseudo-label prediction. JDM alleviated the low accuracy of label estimation for unlabeled data and the training instability caused by not fully exploiting the unlabeled data. Experimental results show that, when the ratio of pseudo-label samples added per iteration is 0.10, the proposed method reaches rank-1 accuracies of 65.5% and 76.2% on the MARS and DukeMTMC-VideoReID datasets, exceeding the one-shot progressive learning method PL (Progressive Learning) by 7.6 and 5.2 percentage points, respectively.
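As a rough illustration of the label-estimation idea, the sketch below (an assumption-laden reconstruction, not the paper's code; the function name, the `alpha` weight and `k` are all hypothetical) blends the direct distance to each labeled sample with the distance to the k nearest labeled neighbors:

```python
import numpy as np

def joint_distance(unlabeled_feats, labeled_feats, k=5, alpha=0.5):
    # Pairwise Euclidean distances: (num_unlabeled, num_labeled)
    d = np.linalg.norm(
        unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=-1)
    # Neighborhood term: mean distance to the k closest labeled samples
    nn = np.sort(d, axis=1)[:, :k].mean(axis=1, keepdims=True)
    # Joint metric: weighted blend of sample and nearest-neighbor distance
    return alpha * d + (1 - alpha) * nn

# Each unlabeled track would then take the pseudo label of its closest
# labeled sample under this joint metric.
```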

Cloth-changing person re-identification based on joint loss capsule network
Qian LIU, Hongyuan WANG, Liang CAO, Boyan SUN, Yu XIAO, Ji ZHANG
Journal of Computer Applications    2021, 41 (12): 3596-3601.   DOI: 10.11772/j.issn.1001-9081.2021061090

Current research on Person Re-Identification (Re-ID) mainly concentrates on short-term scenarios in which a person's clothing is assumed unchanged. The more common practical case, however, is the long-term scenario, in which a person is likely to change clothes and which Re-ID models should therefore take into account. To this end, a cloth-changing person re-identification method based on a joint loss capsule network was proposed. The method builds on ReIDCaps, a capsule network for cloth-changing person re-identification, and uses vector-neuron capsules that carry more information than traditional scalar neurons: the length of a capsule represents the person's identity and its direction represents the clothing. Soft Embedding Attention (SEA) was used to avoid over-fitting, and a Feature Sparse Representation (FSR) mechanism was adopted to extract discriminative features. A joint loss combining label-smoothing regularization cross-entropy loss and Circle Loss was added to improve the generalization ability and robustness of the model. Experimental results on three datasets, Celeb-reID, Celeb-reID-light and NKUP, show that the proposed method has clear advantages over existing person re-identification methods.
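The joint loss itself is easy to sketch. Assuming PyTorch and the third-party pytorch-metric-learning package (the weight `lam` is a hypothetical knob, not a value from the paper):

```python
import torch.nn as nn
from pytorch_metric_learning.losses import CircleLoss  # third-party package

ce = nn.CrossEntropyLoss(label_smoothing=0.1)  # label-smoothing regularization CE
circle = CircleLoss(m=0.4, gamma=80)           # defaults from the Circle Loss paper

def joint_loss(logits, embeddings, labels, lam=1.0):
    # Identity classification loss plus pair-similarity loss on embeddings
    return ce(logits, labels) + lam * circle(embeddings, labels)
```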

Session-based recommendation model of multi-granular graph neural network
Junwei REN, Cheng ZENG, Siyu XIAO, Jinxia QIAO, Peng HE
Journal of Computer Applications    2021, 41 (11): 3164-3170.   DOI: 10.11772/j.issn.1001-9081.2021010060

Session-based recommendation aims to predict a user's next click from the click sequence of the current anonymous session. Most existing methods make recommendations by modeling only the items in the session click sequence and learning their vector representations. Item category information, as a kind of coarse-grained information, can aggregate items and therefore serves as an important supplement to item information. On this basis, a Session-based Recommendation model of Multi-granular Graph Neural Network (SRMGNN) was proposed. First, the embedded vector representations of items and item categories in the session sequence were obtained with a Graph Neural Network (GNN), and the user's attention information was captured with an attention network. Then the item and item-category representations, weighted by attention, were fused and fed into a Gated Recurrent Unit (GRU). Finally, the GRU learned the temporal order information of the session sequence and produced the recommendation list. Experiments on the public Yoochoose and Diginetica datasets verify the benefit of adding item category information and show that the model outperforms all eight baseline models, including Short-Term Attention/Memory Priority (STAMP), Neural Attentive session-based RecomMendation (NARM) and GRU4REC, on the evaluation indices Precision@20 and Mean Reciprocal Rank (MRR)@20.
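A minimal sketch of the fusion-then-GRU step (the GNN embedding stage is omitted, and all names and dimensions are assumptions):

```python
import torch
import torch.nn as nn

class FusionGRU(nn.Module):
    def __init__(self, n_items, n_cats, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)  # stand-ins for GNN outputs
        self.cat_emb = nn.Embedding(n_cats, dim)
        self.att = nn.Linear(2 * dim, 1)            # attention over two granularities
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, items, cats):                 # (batch, session_len) ids
        i, c = self.item_emb(items), self.cat_emb(cats)
        w = torch.sigmoid(self.att(torch.cat([i, c], dim=-1)))
        fused = w * i + (1 - w) * c                 # attention-weighted fusion
        _, h = self.gru(fused)                      # temporal order of the session
        return h.squeeze(0)                         # session vector for scoring items
```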

Software defect number prediction method based on data oversampling and ensemble learning
JIAN Yiheng, YU Xiao
Journal of Computer Applications    2018, 38 (9): 2637-2643.   DOI: 10.11772/j.issn.1001-9081.2018020507
Predicting the number of defects in software modules helps testers pay more attention to the modules with more defects and thus allocate limited testing resources reasonably. Focusing on the issue that software defect datasets are imbalanced, a method based on oversampling and ensemble learning (abbreviated as SMOTENDEL) for predicting the number of defects was proposed. Firstly, n balanced datasets were obtained by oversampling the original software defect dataset n times. Then, n individual models for predicting the number of defects were trained on the n balanced datasets using regression algorithms. Finally, the n individual models were combined into an ensemble prediction model, which was used to predict the number of defects in a new software module. The experimental results show that SMOTENDEL outperforms the original prediction methods: when Decision Tree Regression (DTR), Bayesian Ridge Regression (BRR) and Linear Regression (LR) are used as the individual prediction models, the improvement is 7.68%, 3.31% and 3.38%, respectively.
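The ensemble skeleton is straightforward; in the sketch below the oversampler is a crude stand-in (duplicate defective modules with jitter) for the paper's SMOTE-style variant, and DTR is used as the individual model:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def oversample(X, y, rng):
    # Toy balancing step: replicate (with small jitter) modules that
    # contain defects until they match the defect-free modules in number.
    defective, clean = np.flatnonzero(y > 0), np.flatnonzero(y == 0)
    idx = rng.choice(defective, size=len(clean), replace=True)
    X_new = X[idx] + rng.normal(scale=0.01, size=X[idx].shape)
    return np.vstack([X[clean], X_new]), np.concatenate([y[clean], y[idx]])

def smotendel_predict(X, y, X_test, n=10, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n):  # n balanced datasets -> n individual regressors
        Xb, yb = oversample(X, y, rng)
        preds.append(DecisionTreeRegressor().fit(Xb, yb).predict(X_test))
    return np.mean(preds, axis=0)  # combine the individuals by averaging
```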
Impact of regression algorithms on performance of defect number prediction model
FU Zhongwang, XIAO Rong, YU Xiao, GU Yi
Journal of Computer Applications    2018, 38 (3): 824-828.   DOI: 10.11772/j.issn.1001-9081.2017081935
Existing studies neither consider the imbalanced data distribution in defect datasets nor employ proper performance measures when evaluating regression models for predicting the number of defects. To address this, the impact of different regression algorithms on models for predicting the number of defects was explored using Fault-Percentile-Average (FPA) as the performance measure. Experiments were conducted on six datasets from the PROMISE repository to analyze this impact and the differences among ten regression algorithms for predicting the number of defects. The results show that models built by different regression algorithms produce varying predictions, and that the gradient boosting regression algorithm and the Bayesian ridge regression algorithm achieve better performance overall.
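FPA can be computed in a few lines; this follows the measure's common definition (rank modules by predicted defect count, then average the fraction of actual defects captured at every cut-off):

```python
import numpy as np

def fpa(y_true, y_pred):
    order = np.argsort(y_pred)[::-1]        # most defect-prone predicted first
    cum = np.cumsum(y_true[order])          # defects captured at each cut-off m
    return float(np.mean(cum / y_true.sum()))  # in (0, 1]; higher is better
```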
Elastic scheduling strategy for cloud resource based on Docker
PENG Liping, LYU Xiaodan, JIANG Chaohui, PENG Chenghui
Journal of Computer Applications    2018, 38 (2): 557-562.   DOI: 10.11772/j.issn.1001-9081.2017081943
Considering the problem of elastic scheduling of cloud resources and the characteristics of Ceph data storage, a cloud resource elastic scheduling strategy based on Docker containers was proposed. First, it was pointed out that Docker container data volumes cannot work across different hosts, which makes online application migration difficult, so the data storage method of the Ceph cluster was improved. Furthermore, a resource scheduling optimization model based on the comprehensive load of nodes was established. Finally, by combining the characteristics of the Ceph cluster and Docker containers, Docker Swarm orchestration was used to achieve container deployment and online application migration that take both data storage and cluster load into account. The experimental results show that, compared with other scheduling strategies, the proposed strategy achieves elastic scheduling of cloud platform resources through a finer-grained partitioning of cluster resources, makes reasonable use of cloud platform resources, and reduces data center operating costs while guaranteeing application performance.
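The comprehensive-load idea reduces to a weighted score per node; the weights below are illustrative assumptions, not the paper's calibrated values:

```python
def node_load(cpu, mem, net, disk, w=(0.4, 0.3, 0.15, 0.15)):
    # Composite load in [0, 1] from normalized per-resource utilizations
    return w[0] * cpu + w[1] * mem + w[2] * net + w[3] * disk

def pick_target(nodes):
    # Deploy or migrate a container onto the least-loaded node
    # nodes: {name: (cpu, mem, net, disk)}
    return min(nodes, key=lambda name: node_load(*nodes[name]))
```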
Clustering algorithm with maximum distance between clusters based on improved kernel fuzzy C-means
LI Bin, DI Lan, WANG Shaohua, YU Xiaotong
Journal of Computer Applications    2016, 36 (7): 1981-1987.   DOI: 10.11772/j.issn.1001-9081.2016.07.1981
General kernel clustering only concerns relationships within clusters while ignoring those between clusters, so misclassification easily occurs when clustering datasets with fuzzy and noisy boundaries. To solve this problem, a new clustering algorithm based on the Kernel Fuzzy C-Means (KFCM) clustering algorithm was proposed, called Kernel Fuzzy C-Means with Maximum distance between clusters (MKFCM). Considering both within-cluster and between-cluster relationships, a penalty term representing the distance between centers in feature space, together with a control parameter, was introduced. In this way the distance between cluster centers was widened and the samples near boundaries were classified more accurately. Experimental results on simulated datasets show that, compared with traditional clustering algorithms, the proposed algorithm reduces the offset distance of cluster centers noticeably; on man-made Gaussian datasets, its ACCuracy (ACC), Normalized Mutual Information (NMI) and Rand Index (RI) were improved to 0.9132, 0.7575 and 0.9138, respectively. The proposed algorithm thus shows theoretical research value for datasets with fuzzy and noisy boundaries.
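One plausible form of such an objective (an illustration consistent with the abstract, not the paper's exact formula) is the usual KFCM term minus a center-separation penalty weighted by a control parameter η:

```latex
J = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{m}\,\lVert \Phi(x_k) - v_i \rVert^2
    \;-\; \eta \sum_{i=1}^{c} \sum_{j \neq i} \lVert v_i - v_j \rVert^2
```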
Classification algorithm of support vector machine with privacy preservation based on information concentration
DI Lan, YU Xiaotong, LIANG Jiuzhen
Journal of Computer Applications    2016, 36 (2): 392-396.   DOI: 10.11772/j.issn.1001-9081.2016.02.0392
The classification decision process of the Support Vector Machine (SVM) involves studying the original training samples, which easily causes privacy disclosure. To solve this problem, a classification approach with privacy preservation called IC-SVM (Information Concentration Support Vector Machine) was proposed. Firstly, the original training data was concentrated with the Fuzzy C-Means (FCM) clustering algorithm according to each sample point and its neighbors. Then the cluster centers were reconstructed through information concentration to obtain new samples. Finally, the new samples were trained to obtain the decision function used for classification. The experimental results on UCI and PIE datasets show that the proposed method achieves good classification accuracy while preventing privacy disclosure.
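A simplified sketch of the idea, with a hand-rolled FCM and per-class clustering standing in for the paper's neighborhood-based information concentration (all simplifications are ours):

```python
import numpy as np
from sklearn.svm import SVC

def fcm_centers(X, c, m=2.0, iters=100, seed=0):
    # Plain fuzzy C-means: returns c cluster centers of X
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1) + 1e-12
        w = d ** (-2.0 / (m - 1))
        U = w / w.sum(axis=1, keepdims=True)
    return centers

def icsvm_fit(X, y, c=10):
    # Train the SVM on per-class cluster centers instead of raw samples,
    # so individual training records never reach the classifier.
    Xs, ys = [], []
    for label in np.unique(y):
        centers = fcm_centers(X[y == label], c)
        Xs.append(centers)
        ys += [label] * len(centers)
    return SVC().fit(np.vstack(Xs), ys)
```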
Mining mobility patterns based on deep representation model
CHEN Meng, YU Xiaohui, LIU Yang
Journal of Computer Applications    2016, 36 (1): 33-38.   DOI: 10.11772/j.issn.1001-9081.2016.01.0033
Since the order of locations and time both play pivotal roles in understanding user mobility patterns in spatio-temporal trajectories, a novel deep representation model for trajectories was proposed. The model considers two characteristics of spatio-temporal trajectories: 1) different orders of locations indicate different user mobility patterns; 2) trajectories tend to be cyclical and to change over time. First, two time-ordered locations were combined into a location sequence; second, the sequence and its corresponding time bin were combined into a temporal location sequence, the basic unit for describing trajectory features; finally, the deep representation model was used to train a feature vector for each sequence. To verify the effectiveness of the model, the temporal location sequence vectors were applied to user mobility pattern mining, with empirical studies on a real check-in dataset from Gowalla. The experimental results confirm that the proposed method discovers explicit movement patterns (e.g., working, shopping) that Word2Vec has difficulty discovering.
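To make the basic unit concrete, here is an assumed encoding of temporal location sequences, trained with plain gensim Word2Vec as a stand-in for the paper's deep representation model (the token format and time bins are our invention):

```python
from gensim.models import Word2Vec

BINS = ("morning", "noon", "evening", "night")

def temporal_sequences(trajectories):
    # trajectory: list of (location_id, hour-of-day) check-ins in time order
    seqs = []
    for traj in trajectories:
        tokens = [f"{l1}->{l2}@{BINS[h1 * len(BINS) // 24]}"
                  for (l1, h1), (l2, _) in zip(traj, traj[1:])]
        seqs.append(tokens)
    return seqs

trajectories = [[(12, 9), (47, 19), (3, 22)]]  # toy check-in data
model = Word2Vec(temporal_sequences(trajectories),
                 vector_size=128, window=5, min_count=1, sg=1)
```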
3D segmentation method combining region growing and graph cut for coronary arteries computed tomography angiography images
JIANG Wei, LYU Xiaoqi, REN Xiaoying, REN Guoyin
Journal of Computer Applications    2015, 35 (5): 1462-1466.   DOI: 10.11772/j.issn.1001-9081.2015.05.1462

To address the low efficiency of segmenting three-dimensional Computed Tomography Angiography (CTA) coronary artery images, which have complex structure and small regions of interest, a segmentation algorithm combining region growing and graph cut was proposed. Firstly, threshold-based region growing was used to divide the images into several regions, which removed irrelevant pixels, simplified the structure and highlighted the regions of interest. Then, according to gray-level and spatial information, the simplified images were built into a network graph. Finally, the network graph was segmented using graph cut theory to obtain the segmented coronary artery image. The experimental results show that, compared with the traditional graph cut, segmentation efficiency increases by about 51.7%, reducing computational complexity; in terms of rendering quality, the target areas in the segmented coronary artery images are complete, which helps doctors analyze lesions correctly.
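The first stage is standard threshold-based region growing; a minimal 3D version (seed and thresholds are user-supplied assumptions) looks like this:

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, lo, hi):
    # Flood-fill voxels whose intensity lies in [lo, hi], starting from a
    # seed inside the coronary artery; graph cut then runs on the result.
    mask = np.zeros(vol.shape, dtype=bool)
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        if mask[z, y, x] or not (lo <= vol[z, y, x] <= hi):
            continue
        mask[z, y, x] = True
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1]
                    and 0 <= nx < vol.shape[2]):
                q.append((nz, ny, nx))
    return mask
```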

Sequence image super-resolution reconstruction based on L1 and L2 mixed norm
LI Yinhui, LYU Xiaoqi, YU Hefeng
Journal of Computer Applications    2015, 35 (3): 840-843.   DOI: 10.11772/j.issn.1001-9081.2015.03.840

In order to filter out Gaussian noise and impulse noise simultaneously and to obtain a high-resolution image in super-resolution reconstruction, a sequence image super-resolution method using an L1 and L2 mixed norm with Bilateral Total Variation (BTV) regularization was proposed. Firstly, a multi-resolution optical flow model was used to register the low-resolution sequence images with sub-pixel precision, and the complementary information was then used to raise image resolution. Secondly, taking advantage of the L1 and L2 mixed norm, the BTV regularization algorithm was used to solve the ill-posed problem. Finally, the proposed algorithm was applied to sequence image super-resolution. Experimental results show that the method decreases the mean square error and increases the Peak Signal-to-Noise Ratio (PSNR) by 1.2 dB to 5.2 dB. The algorithm smooths Gaussian and impulse noise, protects image edge information and improves image identifiability, providing a solid technical basis for license plate recognition, face recognition, video surveillance, etc.
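A common way to write such a cost (a generic illustration, not the paper's exact formulation): X is the high-resolution estimate, Y_k the k-th observed frame, D, H and F_k the decimation, blur and warp operators, and ρ the mixed norm (L2 for Gaussian residuals, L1 for impulse outliers); the second term is the BTV regularizer over pixel shifts S_x^l, S_y^m:

```latex
\hat{X} = \arg\min_{X} \sum_{k} \rho\!\left( D H F_k X - Y_k \right)
        + \lambda \sum_{l=-P}^{P} \sum_{m=0}^{P}
          \alpha^{|l|+|m|} \left\lVert X - S_x^{l} S_y^{m} X \right\rVert_1
```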

Curved planar reformation algorithm based on coronary artery outline extraction by multi-planar reformation
HOU He, LYU Xiaoqi, JIA Dongzheng, YU Hefeng
Journal of Computer Applications    2015, 35 (1): 211-214.   DOI: 10.11772/j.issn.1001-9081.2015.01.0211

To solve the problem that three-dimensional clipping and Multi-Planar Reformation (MPR) obtain only the geometric information of tissues or organs and cannot display the structure of a curving organ in a single image, a Curved Planar Reformation (CPR) algorithm based on MPR outline extraction was proposed to reform the coronary artery. Firstly, discrete points describing the outline of the coronary artery were extracted using MPR, and Cardinal interpolation was used to obtain a smooth fitted outline curve. Secondly, the outline was projected along the direction of interest to obtain the scanning curved plane. Finally, the scanning curved plane corresponding to the cardiac volume data was displayed, yielding the CPR image of the artery. The experimental results show that, compared with the three-dimensional clipping method and the three-dimensional data field method, the extraction speed of the coronary artery outline increases by about 4 to 6 frames per second and the rendering time is shorter; in terms of rendering quality, the curved-plane image of the coronary artery is clear and complete, which helps doctors analyze lesions clearly and satisfies the demands of actual clinical diagnosis.
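The Cardinal interpolation step can be sketched directly; this generic implementation (tension value assumed) densifies the MPR-extracted outline points into a smooth curve:

```python
import numpy as np

def cardinal_spline(pts, tension=0.5, samples=20):
    # Hermite segments with Cardinal tangents through the outline points
    pts = np.asarray(pts, dtype=float)
    p = np.vstack([pts[0], pts, pts[-1]])   # pad endpoints
    s = (1 - tension) / 2.0                 # tangent scale
    curve = []
    for i in range(1, len(p) - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        m1, m2 = s * (p2 - p0), s * (p3 - p1)
        for t in np.linspace(0.0, 1.0, samples, endpoint=False):
            h = (2*t**3 - 3*t**2 + 1, t**3 - 2*t**2 + t,
                 -2*t**3 + 3*t**2, t**3 - t**2)      # Hermite basis
            curve.append(h[0]*p1 + h[1]*m1 + h[2]*p2 + h[3]*m2)
    return np.array(curve)
```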

Parallel text hierarchical clustering based on MapReduce
YU Xiaoshan, WU Yangyang
Journal of Computer Applications    2014, 34 (6): 1595-1599.   DOI: 10.11772/j.issn.1001-9081.2014.06.1595

Concerning the poor scalability of traditional hierarchical clustering algorithms when dealing with large-scale text, a parallel hierarchical clustering algorithm based on the MapReduce programming model was proposed. A vertical data partitioning algorithm based on the statistical characteristics of the component groups of text vectors was developed for data partitioning in MapReduce. Additionally, the sorting characteristics of MapReduce were exploited to select the merge points, making the algorithm more efficient and helping to improve clustering accuracy. The experimental results show that the proposed algorithm is effective and scales well.
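A single-process imitation of the described job (illustrative only; the paper targets a real MapReduce cluster): map emits (partition, sub-vector) pairs after the vertical split, and reduce receives them grouped by key, mirroring the framework's sort phase:

```python
from itertools import groupby

def map_phase(doc_id, vector, n_parts):
    step = len(vector) // n_parts
    for p in range(n_parts):                    # vertical data partitioning
        yield p, (doc_id, vector[p * step:(p + 1) * step])

def reduce_phase(records):
    records = sorted(records, key=lambda r: r[0])   # the framework's sort
    for part, group in groupby(records, key=lambda r: r[0]):
        subs = dict(v for _, v in group)        # doc_id -> sub-vector
        yield part, subs  # partial similarities between sub-vectors go here

docs = {0: [1.0, 0.0, 2.0, 1.0], 1: [0.0, 1.0, 1.0, 0.0]}
records = [kv for d, v in docs.items() for kv in map_phase(d, v, 2)]
for part, subs in reduce_phase(records):
    print(part, subs)
```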

New rapid algorithm for detecting girth of low-density parity-check codes
LI Jiong-cheng, LI Gui-yu, XIAO Heng-hui, HUANG Hai-yi
Journal of Computer Applications    2012, 32 (11): 3100-3106.   DOI: 10.3724/SP.J.1087.2012.03100
Concerning the girth problem of Low-Density Parity-Check (LDPC) codes, a new rapid algorithm for detecting the girth of LDPC codes was proposed by combining the Dijkstra algorithm with the features of the Tanner graph; its time complexity is lower than that of known algorithms. Compared with known algorithms, this algorithm not only computes rapidly but also returns the girth and its edges in a single pass, avoiding redundant computation. Finally, simulation verifies the feasibility and efficiency of the new algorithm.
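Because the Tanner graph is unweighted, Dijkstra degenerates to BFS; a compact version of the underlying girth computation (our reconstruction, O(E·(V+E))) tests, for every edge (u, v), the shortest u-v path that avoids that edge:

```python
from collections import deque

def girth(adj):
    # adj: {node: set(neighbors)} for an undirected Tanner graph
    best = float("inf")
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    for u, v in edges:
        dist, q = {u: 0}, deque([u])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if {x, y} == {u, v}:
                    continue                 # skip the edge under test
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        if v in dist:
            best = min(best, dist[v] + 1)    # cycle through (u, v)
    return best  # always even for a bipartite Tanner graph
```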